58 research outputs found

    Multimodal classification of driver glance

    This paper presents a multimodal approach to in-vehicle classification of driver glances. Driver glance is a strong predictor of cognitive load and is a useful input to many applications in the automotive domain. Six descriptive glance regions are defined and a classifier is trained on video recordings of drivers from a single low-cost camera. Visual features such as head orientation, eye gaze and confidence ratings are extracted, then statistical methods are used to perform failure analysis and calibration on the visual features. Non-visual features such as steering wheel angle and indicator position are extracted from a RaceLogic VBOX system. The approach is evaluated on a dataset containing multiple 60-second samples from 14 participants recorded while driving in a natural environment. We compare our multimodal approach to separate unimodal approaches using both Support Vector Machine (SVM) and Random Forest (RF) classifiers. RF Mean Decrease in Gini Index is used to rank features, which gives insight into their relative contributions and improves classifier performance. We demonstrate that our multimodal approach yields significantly better classification performance than unimodal approaches. The final model achieves an average F1 score of 70.5% across the six classes.
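
    The fusion-and-ranking step described above could be sketched as follows, assuming scikit-learn. The feature names, data shapes and random data are illustrative placeholders, not the paper's dataset or fitted model; scikit-learn's feature_importances_ corresponds to the Mean Decrease in Gini (impurity) ranking the abstract mentions.

```python
# Hypothetical sketch: concatenate visual and vehicle-signal features,
# train a Random Forest, and rank features by Mean Decrease in Gini.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 840  # placeholder, e.g. multiple 60 s samples from 14 drivers

# Visual features (head orientation, eye gaze, tracker confidence)...
visual = rng.normal(size=(n_samples, 5))
# Non-visual features (steering wheel angle, indicator position)...
vehicle = rng.normal(size=(n_samples, 2))
X = np.hstack([visual, vehicle])        # multimodal fusion by concatenation
y = rng.integers(0, 6, size=n_samples)  # six glance-region classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("macro F1:", cross_val_score(clf, X, y, scoring="f1_macro").mean())

clf.fit(X, y)
names = ["head_yaw", "head_pitch", "gaze_x", "gaze_y", "confidence",
         "steering_angle", "indicator"]
for name, imp in sorted(zip(names, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>15s}  Gini importance = {imp:.3f}")
```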

    Analysis of yawning behaviour in spontaneous expressions of drowsy drivers

    Driver fatigue is one of the main causes of road accidents. It is essential to develop a reliable driver drowsiness detection system that can alert drivers without disturbing them and is robust to environmental changes. This paper explores yawning behaviour as a sign of drowsiness in spontaneous expressions of drowsy drivers in simulated driving scenarios. We analyse a labelled dataset of videos of sleep-deprived versus alert drivers and demonstrate the correlation between hand-over-face touches, face occlusions and yawning. We propose that face touches can be used as a novel cue in automated drowsiness detection alongside yawning and eye behaviour. Moreover, we present an automatic approach to detect yawning based on extracting geometric and appearance features of both mouth and eye regions. Our approach successfully detects both hand-covered and uncovered yawns with an accuracy of 95%. Ultimately, our goal is to use these results in designing a hybrid drowsiness detection system.
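
    A minimal sketch of one geometric cue of the kind the abstract mentions: a mouth aspect ratio computed from facial landmarks and thresholded over consecutive frames to flag candidate yawns. The landmark indices follow the common 68-point dlib convention; the threshold, frame count and overall approach are assumptions for illustration, not the paper's method (which also uses appearance features and eye-region cues, and handles hand-covered yawns).

```python
import numpy as np

def mouth_aspect_ratio(landmarks: np.ndarray) -> float:
    """landmarks: (68, 2) array of (x, y) facial landmark coordinates."""
    # Vertical distances between upper and lower inner-lip points.
    a = np.linalg.norm(landmarks[61] - landmarks[67])
    b = np.linalg.norm(landmarks[62] - landmarks[66])
    c = np.linalg.norm(landmarks[63] - landmarks[65])
    # Horizontal distance between the inner mouth corners.
    d = np.linalg.norm(landmarks[60] - landmarks[64])
    return (a + b + c) / (2.0 * d)

def is_candidate_yawn(mar_sequence, threshold=0.6, min_frames=15):
    """Flag a yawn when MAR stays above threshold for min_frames frames.

    Threshold and duration are placeholder values, not fitted parameters.
    """
    run = 0
    for mar in mar_sequence:
        run = run + 1 if mar > threshold else 0
        if run >= min_frames:
            return True
    return False
```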

    Developing predictive equations to model the visual demand of in-vehicle touchscreen HMIs

    Touchscreen HMIs are commonly employed as the primary control interface and touch point in vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully functioning prototypes, test rigs and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modelling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances and mean glance duration) elicited by in-vehicle touchscreen HMI designs when used concurrently with driving. The model incorporates information-theoretic components based on Hick-Hyman Law decision/search time and Fitts’ Law pointing time, and considers the anticipation afforded by structuring and repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs much earlier in the design cycle than traditional evaluation techniques. Further validation work is required to explore the behaviour associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMIs that minimise visual demand.
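
    The two information-theoretic components named above can be written down directly. The sketch below uses placeholder coefficients (a, b) rather than the paper's fitted values, and assumes the Shannon formulations of both laws; it shows only the decision-plus-pointing structure, not the paper's anticipation terms.

```python
from math import log2

def hick_hyman_time(n_items: int, a: float = 0.2, b: float = 0.15) -> float:
    """Decision/search time (s) to choose among n equally likely items."""
    return a + b * log2(n_items + 1)

def fitts_time(distance_mm: float, width_mm: float,
               a: float = 0.1, b: float = 0.1) -> float:
    """Pointing time (s) to acquire a target, Shannon formulation."""
    return a + b * log2(distance_mm / width_mm + 1)

# Predicted time for one touchscreen selection: decide among 8 on-screen
# targets, then reach a 15 mm wide target 120 mm away. Coefficients are
# placeholders; a real model would fit them to empirical glance data.
task_time = hick_hyman_time(8) + fitts_time(120.0, 15.0)
print(f"predicted selection time: {task_time:.2f} s")
```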